Fundamentals of Deep Learning
林嶔 (Lin, Chin)
– Prediction function: a model produces its prediction \(\hat{y}\) through a function of the input \(x\):

\[\hat{y} = f(x)\]
– Taking simple linear regression as an example, the prediction function is:

\[\hat{y} = f(x) = b_{0} + b_{1}x\]
– Loss function: the loss measures the difference between the observed values and the predictions:

\[loss = diff(y, \hat{y})\]

- Taking the loss function of simple linear regression as an example, the value to be computed is the residual sum of squares, so the expression can be rewritten as:
\[loss = diff(y, \hat{y}) = \sum \limits_{i=1}^{n} \left(y_{i} - \hat{y_{i}}\right)^{2}\]
– Combining the two, the loss can be written directly in terms of the prediction function:

\[loss = diff(y, f(x))\]
\[loss = diff(y, f(x)) = \sum \limits_{i=1}^{n} \left(y_{i} - \left(b_{0} + b_{1}x_{1,i}\right)\right)^{2}\]
– Fitting the model then amounts to finding the parameter values that minimize the loss:

\[min(loss)\]
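To make these definitions concrete, here is a minimal R sketch that evaluates the residual-sum-of-squares loss for one assumed pair of coefficients; the toy data and the values of b0 and b1 below are arbitrary choices for illustration only, not fitted results:

# Toy data (made up for illustration)
toy.x = c(1, 2, 3, 4)
toy.y = c(2.1, 3.9, 6.2, 8.1)

# An assumed (not fitted) prediction function: y.hat = b0 + b1 * x
b0 = 0
b1 = 2
toy.y.hat = b0 + b1 * toy.x

# Loss: residual sum of squares
toy.loss = sum((toy.y - toy.y.hat)^2)
print(toy.loss)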
- As a simple example of finding an extremum, consider the following function:

\[f(x) = x^{2} + 2x + 1\]

- Next, we differentiate this function and look for the point where the derivative equals 0, which gives the location of its extremum:
\[\frac{\partial}{\partial x} f(x) = 2x + 2 = 0\]
\[x = -1\]
Why can 'differentiation' be used to find the extrema of a function? It may help to review the basic idea: the 'derivative' obtained by differentiating a 'function' is really that function's 'tangent-slope function', and a point where the tangent-slope function equals 0 means the function has stopped changing after a stretch of rising or falling, so that point is exactly where an extremum lies.
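As a quick check on the differentiation above, R's built-in symbolic differentiation can recover the same derivative; a minimal sketch:

# Symbolic derivative of f(x) = x^2 + 2x + 1
df = D(expression(x^2 + 2*x + 1), "x")
print(df)                      # 2 * x + 2
print(eval(df, list(x = -1)))  # the slope at x = -1 is 0, confirming the extremum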
However, the extremum-finding process above has one very annoying step: solving an equation of degree one in one unknown. As soon as the function gets a little more complex, we would have to solve a system of degree-M equations in N unknowns, which makes the whole process extremely complicated, so we need to look for another approach.
Here we formally introduce 'gradient descent'. First, what is a 'gradient'? A 'gradient' is essentially a 'slope' (note that this definition is not precise, but we will use it for now to avoid a pile of heavy mathematical language). Under this definition, 'gradient descent' means that, while searching for an extremum, we move according to the 'gradient' step by step and thereby arrive at the extremum.
Below, using the function \(f(x)\) above as an example, we first pick an arbitrary starting value and define it as epoch 0:
\[x_{\left(epoch:0\right)} = 10\]
- At every subsequent epoch, we update \(x\) by moving against the gradient, scaled by a learning rate \(lr\):

\[x_{\left(epoch:t\right)} = x_{\left(epoch:t - 1\right)} - lr \cdot \frac{\partial}{\partial x}f(x_{\left(epoch:t - 1\right)})\]

- Since the derivative of the function above is \(2x + 2\), we can substitute it in and compute (here with \(lr = 0.05\)):
\[ \begin{align} x_{\left(epoch:1\right)} & = x_{\left(epoch:0\right)} - lr \cdot \frac{\partial}{\partial x}f(x_{\left(epoch:0\right)}) \\ & = 10 - lr \cdot \frac{\partial}{\partial x}f(10) \\ & = 10 - 0.05 \cdot (2\cdot10+2)\\ & = 8.9 \end{align} \]
\[ \begin{align} x_{\left(epoch:2\right)} & = x_{\left(epoch:1\right)} - lr \cdot \frac{\partial}{\partial x}f(x_{\left(epoch:1\right)}) \\ & = 8.9 - lr \cdot \frac{\partial}{\partial x}f(8.9) \\ & = 8.9 - 0.05 \cdot (2\cdot8.9+2)\\ & = 7.91 \end{align} \]
\[ \begin{align} x_{\left(epoch:3\right)} & = 7.91 - 0.891 = 7.019 \\ x_{\left(epoch:4\right)} & = 7.019 - 0.8019 = 6.2171 \\ x_{\left(epoch:5\right)} & = 6.2171 - 0.72171 = 5.49539 \\ & \dots \\ x_{\left(epoch:\infty\right)} & = -1 \end{align} \]
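The hand calculation above can be reproduced in a few lines of R; a minimal sketch using the same starting value of 10 and lr = 0.05:

# Gradient descent on f(x) = x^2 + 2x + 1, whose derivative is 2x + 2
differential.fun = function(x) {
  return(2*x + 2)
}

lr = 0.05
x.value = 10                      # epoch 0

for (t in 1:1000) {
  x.value = x.value - lr * differential.fun(x.value)
  if (t <= 5) {print(x.value)}    # 8.9, 7.91, 7.019, 6.2171, 5.49539, ...
}

print(x.value)                    # approaches -1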
- Let us now implement gradient descent in R for the function:

\[f(x) = x^{2}\]
# Objective function and its derivative
original.fun = function(x) {
  return(x^2)
}

differential.fun = function(x) {
  return(2*x)
}

# Gradient descent settings
start.value = 5
learning.rate = 0.1
num.iteration = 1000

result.x = rep(NA, num.iteration)

# Iteratively move against the gradient
for (i in 1:num.iteration) {
  if (i == 1) {
    result.x[1] = start.value
  } else {
    result.x[i] = result.x[i-1] - learning.rate * differential.fun(result.x[i-1])
  }
}

print(tail(result.x, 1))
## [1] 7.68895e-97
– When using gradient descent, as a rule learning.rate should not be set too large; instead, watch the convergence speed and adjust it appropriately only if convergence turns out to be too slow.
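To make this concrete, here is a minimal sketch that reruns the example above with a few assumed learning rates (the values tried are arbitrary choices for illustration) and prints where gradient descent ends up after 1000 iterations; a very small rate converges slowly, while a rate above 1 diverges for this function:

# Effect of the learning rate on convergence for f(x) = x^2
for (lr.try in c(0.001, 0.1, 1.01)) {
  x.value = start.value
  for (i in 1:num.iteration) {
    x.value = x.value - lr.try * differential.fun(x.value)
  }
  cat("learning.rate =", lr.try, "-> final x =", x.value, "\n")
}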
– Prediction function
\[\hat{y_{i}} = f(x) = b_{0} + b_{1}x_{i}\]
– Loss function (the extra factor of \(\frac{1}{2n}\) does not change where the minimum lies: the \(\frac{1}{n}\) averages over the samples and the \(\frac{1}{2}\) simply cancels when we differentiate)
\[loss = diff(y, \hat{y}) = \frac{{1}}{2n}\sum \limits_{i=1}^{n} \left(y_{i} - \hat{y_{i}}\right)^{2}\]
– Combining the prediction function and the loss function
\[loss = diff(y, f(x)) = \frac{{1}}{2n}\sum \limits_{i=1}^{n} \left(y_{i} - \left(b_{0} + b_{1}x_{i}\right)\right)^{2}\]
– Partial derivatives of the loss with respect to \(b_0\) and \(b_1\)
\[ \begin{align} \frac{\partial}{\partial b_0}loss & = \frac{{1}}{2n}\sum \limits_{i=1}^{n} \frac{\partial}{\partial \hat{y_{i}}} \left(y_{i} - \hat{y_{i}}\right)^{2} \cdot \frac{\partial}{\partial b_0} \hat{y_{i}}\\ & = \frac{1}{n} \sum \limits_{i=1}^{n}\left( \hat{y_{i}} - y_{i} \right) \cdot \frac{\partial}{\partial b_0} (b_{0} + b_{1}x_{i}) \\ & = \frac{1}{n} \sum \limits_{i=1}^{n}\left( \hat{y_{i}} - y_{i} \right) \\\\ \frac{\partial}{\partial b_1}loss & = \frac{1}{n} \sum \limits_{i=1}^{n}\left( \hat{y_{i}} - y_{i} \right) \cdot x_{i} \end{align} \]
– Gradient descent updates for \(b_0\) and \(b_1\)

\[ \begin{align} b_{0\left(epoch:t\right)} & = b_{0\left(epoch:t - 1\right)} - lr \cdot \frac{\partial}{\partial b_{0}}loss \\ b_{1\left(epoch:t\right)} & = b_{1\left(epoch:t - 1\right)} - lr \cdot \frac{\partial}{\partial b_{1}}loss \end{align} \]
– Please download this dataset from here
# Load the dataset and use the third column as x and the fourth column as y
iris = read.csv('data/iris.csv')

x = iris[,3]
y = iris[,4]

# Loss function
original.fun = function(b0, b1, x, y) {
  y.hat = b0 + b1 * x
  return(sum((y.hat - y)^2)/2/length(x))
}

# Partial derivative with respect to b0
differential.fun.b0 = function(b0, b1, x, y) {
  y.hat = b0 + b1 * x
  return(sum(y.hat - y)/length(x))
}

# Partial derivative with respect to b1
differential.fun.b1 = function(b0, b1, x, y) {
  y.hat = b0 + b1 * x
  return(sum((y.hat - y)*x)/length(x))
}
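Before fitting anything, the hand-derived partial derivatives can be spot-checked against original.fun with a numerical finite-difference approximation; a minimal sketch at the arbitrary point b0 = 0, b1 = 0:

# Compare analytic gradients with central finite differences at b0 = 0, b1 = 0
h = 1e-6
num.grad.b0 = (original.fun(b0 = h, b1 = 0, x = x, y = y) -
               original.fun(b0 = -h, b1 = 0, x = x, y = y)) / (2 * h)
num.grad.b1 = (original.fun(b0 = 0, b1 = h, x = x, y = y) -
               original.fun(b0 = 0, b1 = -h, x = x, y = y)) / (2 * h)
print(c(num.grad.b0, differential.fun.b0(b0 = 0, b1 = 0, x = x, y = y)))
print(c(num.grad.b1, differential.fun.b1(b0 = 0, b1 = 0, x = x, y = y)))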
model = lm(y~x)
print(model)
##
## Call:
## lm(formula = y ~ x)
##
## Coefficients:
## (Intercept) x
## -0.3631 0.4158
lr = 0.05
num.iteration = 1000

# Pre-allocate one extra slot so index i+1 holds the value after the i-th update
ans_b0 = rep(0, num.iteration + 1)
ans_b1 = rep(0, num.iteration + 1)

for (i in 1:num.iteration) {
  ans_b0[i+1] = ans_b0[i] - lr * differential.fun.b0(b0 = ans_b0[i], b1 = ans_b1[i], x = x, y = y)
  ans_b1[i+1] = ans_b1[i] - lr * differential.fun.b1(b0 = ans_b0[i], b1 = ans_b1[i], x = x, y = y)
}
print(tail(ans_b0, 1))
## [1] -0.3629967
print(tail(ans_b1, 1))
## [1] 0.4157381
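As a sanity check, we can compare the loss reached by gradient descent with the loss at the lm() coefficients; the two values should be almost identical:

# Loss at the gradient-descent estimates versus the lm() solution
gd.loss = original.fun(b0 = tail(ans_b0, 1), b1 = tail(ans_b1, 1), x = x, y = y)
lm.loss = original.fun(b0 = coef(model)[1], b1 = coef(model)[2], x = x, y = y)
print(c(gd.loss, lm.loss))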
– We first load the 'mxnet' package
library(mxnet)
– Data: different prediction functions require the data to be arranged in different structures; you only need to remember the arrangement used by the part you plan to use.
– Model structure: this part is responsible for defining the prediction function.
# Data: under the "colmajor" layout mxnet expects a feature-by-sample array
X.array = array(x, dim = c(1, length(x)))
Y.array = array(y, dim = length(y))

# Model structure: a single fully connected unit is exactly b0 + b1 * x
data = mx.symbol.Variable(name = 'data')
fc_layer = mx.symbol.FullyConnected(data = data, num.hidden = 1, name = 'fc_layer')
out_layer = mx.symbol.LinearRegressionOutput(data = fc_layer, name = 'out_layer')

# Train the model with stochastic gradient descent
mx.set.seed(0)
lr_model = mx.model.FeedForward.create(symbol = out_layer,
                                       X = X.array, y = Y.array,
                                       optimizer = "sgd", learning.rate = 0.05, momentum = 0,
                                       array.batch.size = 20, num.round = 100,
                                       ctx = mx.cpu(),
                                       array.layout = "colmajor",
                                       eval.metric = mx.metric.rmse)
lr_model
## $symbol
## C++ object <0x6a14360> of class 'MXSymbol' <0x77d4590>
##
## $arg.params
## $arg.params$fc_layer_weight
## [,1]
## [1,] 0.4242628
##
## $arg.params$fc_layer_bias
## [1] -0.3760956
##
##
## $aux.params
## list()
##
## attr(,"class")
## [1] "MXFeedForwardModel"
#Save model
mx.model.save(lr_model, "model/linear_regression", iteration = 0)
#Load model
lr_model = mx.model.load("model/linear_regression", iteration = 0)
new_X.array = array(3, dim = c(1, 1))
predict_Y = predict(lr_model, new_X.array, array.layout = "colmajor")
print(predict_Y)
## [,1]
## [1,] 0.8966927
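Since the model is just \(b_0 + b_1 x\), the same prediction can be reproduced by hand from the fitted parameters shown earlier (assuming the NDArray parameters can be converted with as.array(), as in the mxnet R API):

# Reproduce the prediction manually from the fitted weight and bias
b1.hat = as.array(lr_model$arg.params$fc_layer_weight)[1]
b0.hat = as.array(lr_model$arg.params$fc_layer_bias)[1]
print(b0.hat + b1.hat * 3)   # should match the predict() output above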
– With the foundation of gradient descent in place, we should be able to see that a neural network is simply a more complex prediction function, and it is solved in much the same way.
– The focus of the next lesson is to write more complex prediction functions and adapt them to the special data format of images. The computation of gradients and the optimization process are handed over entirely to MxNet; all we need to learn is how to train a model and how to use it to make predictions!